1 Neural Network Model

1.1 Biological Neurons

1.2 Mimicking Neural Networks

1.3 Equations

\[ x\beta = \beta_0 + \beta_1 X_1 + \cdots + \beta_n X_n \] \[ prob = \frac{\exp(x\beta)}{1 + \exp(x\beta)} \]

\[ prob = \frac{\exp(\beta_0 + \beta_1 X_1 + \cdots + \beta_n X_n)}{1 + \exp(\beta_0 + \beta_1 X_1 + \cdots + \beta_n X_n)} \]

\[ prob = \frac{1}{1 + e^{-(\beta_0 + \beta_1 X_1 + \cdots + \beta_n X_n)}} \]
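The logistic (sigmoid) transform above maps any linear predictor to a probability in (0, 1); a minimal R sketch:

```r
# Logistic (sigmoid) transform: maps the linear predictor x*beta
# to a probability between 0 and 1.
sigmoid <- function(xb) 1 / (1 + exp(-xb))

sigmoid(0)  # 0.5: a linear predictor of zero sits on the decision boundary
```

Large positive linear predictors push the probability toward 1, large negative ones toward 0.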

2 Binomial Regression Model

2.1 Data

2.1.1 Summary Statistics

Data summary
Name Piped data
Number of rows 20
Number of columns 2
_______________________
Column type frequency:
numeric 2
________________________
Group variables None

Variable type: numeric

skim_variable n_missing complete_rate mean sd p0 p25 p50 p75 p100 hist
학습시간 0 1 2.79 1.51 0.5 1.69 2.62 4.06 5.5 ▇▇▆▅▅
입학여부 0 1 0.50 0.51 0.0 0.00 0.50 1.00 1.0 ▇▁▁▁▇

2.2 Visualization
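The natural plot here is the 0/1 outcomes against study hours with a fitted logistic curve overlaid. A minimal ggplot2 sketch, using hypothetical stand-in data since the actual `lr_tbl` rows are not reproduced in the text:

```r
library(ggplot2)

# Hypothetical stand-in data shaped like lr_tbl (study hours, admitted 0/1).
lr_tbl <- data.frame(
  학습시간 = c(0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5),
  입학여부 = c(0, 0, 0, 1, 0, 1, 0, 1, 1, 1)
)

# Points for the raw outcomes, logistic curve from a binomial glm fit.
p <- ggplot(lr_tbl, aes(x = 학습시간, y = 입학여부)) +
  geom_point() +
  geom_smooth(method = "glm", method.args = list(family = "binomial"),
              se = FALSE)
p
```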

2.3 Model


Call:  glm(formula = 입학여부 ~ 학습시간, family = "binomial", 
    data = lr_tbl)

Coefficients:
(Intercept)     학습시간  
     -4.078        1.505  

Degrees of Freedom: 19 Total (i.e. Null);  18 Residual
Null Deviance:      27.73 
Residual Deviance: 16.06    AIC: 20.06

2.4 Predicting Admission
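With a fitted `glm` object, `predict(..., type = "response")` returns admission probabilities on the 0–1 scale. A sketch using hypothetical stand-in data, since the actual `lr_tbl` rows are not shown:

```r
# Hypothetical stand-in data shaped like lr_tbl (study hours, admitted 0/1).
lr_tbl <- data.frame(
  학습시간 = c(0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5),
  입학여부 = c(0, 0, 0, 1, 0, 1, 0, 1, 1, 1)
)

fit <- glm(입학여부 ~ 학습시간, family = "binomial", data = lr_tbl)

# Predicted admission probability for a student who studies 4 hours.
predict(fit, newdata = data.frame(학습시간 = 4), type = "response")
```

`type = "response"` applies the inverse-logit; the default `type = "link"` would return the raw linear predictor instead.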

2.5 Implementation

Since finding the maximum likelihood estimate (MLE) is equivalent to minimizing the negative log-likelihood, we define the objective function, pass it to a general-purpose optimizer along with initial parameter values, and let it iteratively converge on the parameter estimates.

$$ NLL(y) = -\log(p(y)) \\ \min_{\theta} \sum_y -\log(p(y; \theta)) \\ \max_{\theta} \prod_y p(y; \theta) $$
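The trace below comes from a Nelder-Mead run (via `optimx`). The same idea can be sketched in base R with a hand-written negative log-likelihood and `optim()`, shown here on hypothetical stand-in data since the actual `lr_tbl` rows are not reproduced:

```r
# Negative log-likelihood of the logistic model (the equations above).
nll <- function(beta, x, y) {
  p <- 1 / (1 + exp(-(beta[1] + beta[2] * x)))
  -sum(y * log(p) + (1 - y) * log(1 - p))
}

# Hypothetical stand-in data shaped like lr_tbl.
x <- c(0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5)
y <- c(0, 0, 0, 1, 0, 1, 0, 1, 1, 1)

# Start from beta = (0, 0) and let Nelder-Mead minimize the NLL.
fit <- optim(c(0, 0), nll, x = x, y = y, method = "Nelder-Mead")
fit$par          # approximate MLE for (intercept, slope)
fit$convergence  # 0 means the minimizer reported success
```

At the starting point (0, 0) every fitted probability is 0.5, so the initial objective is n·log 2, matching the pattern of the initial function value in the trace below.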

fn is  fn 
Looking for method =  Nelder-Mead 
Function has  2  arguments
Analytic gradient not made available.
Analytic Hessian not made available.
Scale check -- log parameter ratio= -Inf   log bounds ratio= NA 
Method:  Nelder-Mead 
  Nelder-Mead direct search function minimizer
function value for initial parameters = 13.862944
  Scaled convergence tolerance is 2.06574e-07
Stepsize computed as 0.100000
BUILD              3 13.887933 13.096825
EXTENSION          5 13.862944 12.582216
EXTENSION          7 13.096825 12.067907
EXTENSION          9 12.582216 11.285614
LO-REDUCTION      11 12.067907 11.285614
LO-REDUCTION      13 11.874408 11.285614
EXTENSION         15 11.469089 10.348151
LO-REDUCTION      17 11.285614 10.348151
EXTENSION         19 10.370274 8.914547
LO-REDUCTION      21 10.348151 8.914547
EXTENSION         23 9.266657 8.226456
REFLECTION        25 8.914547 8.030679
LO-REDUCTION      27 8.259889 8.030679
LO-REDUCTION      29 8.226456 8.030679
LO-REDUCTION      31 8.114076 8.030679
HI-REDUCTION      33 8.053435 8.030679
HI-REDUCTION      35 8.052924 8.030679
LO-REDUCTION      37 8.038012 8.030679
HI-REDUCTION      39 8.033349 8.030679
LO-REDUCTION      41 8.031439 8.030144
HI-REDUCTION      43 8.030679 8.030087
HI-REDUCTION      45 8.030144 8.029950
HI-REDUCTION      47 8.030087 8.029938
HI-REDUCTION      49 8.029950 8.029917
HI-REDUCTION      51 8.029938 8.029881
LO-REDUCTION      53 8.029917 8.029881
HI-REDUCTION      55 8.029890 8.029881
LO-REDUCTION      57 8.029889 8.029880
HI-REDUCTION      59 8.029881 8.029880
HI-REDUCTION      61 8.029880 8.029879
HI-REDUCTION      63 8.029880 8.029879
REFLECTION        65 8.029879 8.029879
Exiting from Nelder Mead minimizer
    67 function evaluations used
Post processing for method  Nelder-Mead 
Successful convergence! 
Compute Hessian approximation at finish of  Nelder-Mead 
Compute gradient approximation at finish of  Nelder-Mead 
Save results from method  Nelder-Mead 
$par
[1] -4.076953  1.504453

$value
[1] 8.029879

$message
NULL

$convcode
[1] 0

$fevals
function 
      67 

$gevals
gradient 
      NA 

$nitns
[1] NA

$kkt1
[1] TRUE

$kkt2
[1] TRUE

$xtimes
user.self 
     0.07 

Assemble the answers
       method        p1       p2
1 Nelder-Mead -4.076953 1.504453

Cross-check that these values match the coefficients estimated with the glm() function.

(Intercept)    학습시간 
  -4.077713    1.504645 

2.6 Prediction Function
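A minimal sketch of a prediction function, plugging the coefficients fitted above (intercept ≈ -4.078, slope ≈ 1.505) into the inverse-logit:

```r
# Admission-probability function built from the coefficients fitted above.
predict_admit <- function(hours, b0 = -4.078, b1 = 1.505) {
  1 / (1 + exp(-(b0 + b1 * hours)))
}

predict_admit(4.078 / 1.505)  # 0.5: the decision boundary is at about 2.71 hours
```

Setting the linear predictor to zero gives the boundary -b0/b1 ≈ 2.71 study hours, beyond which admission becomes the more likely outcome.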

 

Written by Kwangchun Lee, data scientist

kwangchun.lee.7@gmail.com